
    Conceptual Views on Tree Ensemble Classifiers

    Random Forests and related tree-based methods are popular for supervised learning from tabular data. Apart from their ease of parallelization, their classification performance is also high. However, these advantages come at the cost of explainability. Statistical methods are often used to compensate for this disadvantage, yet their ability to provide local, and in particular global, explanations is limited. In the present work we propose an algebraic method, rooted in lattice theory, for the (global) explanation of tree ensembles. In detail, we introduce two novel conceptual views on tree ensemble classifiers and demonstrate their explanatory capabilities on Random Forests trained with standard parameters.
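    A minimal sketch of one plausible way to turn a trained forest into a formal context for lattice-based analysis, assuming objects are the training samples and attributes are the split predicates of the trees; this is an illustration only, not the paper's exact conceptual views.

```python
# Hypothetical sketch: build a binary "object x predicate" incidence table
# (a formal context) from a trained random forest. The conceptual views
# proposed in the paper may differ; this only illustrates turning split
# predicates into attributes for lattice-based analysis.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=5, random_state=0).fit(X, y)

attributes = []   # human-readable predicate names, e.g. "x[2] <= 2.45"
columns = []      # one boolean column per predicate

for tree in forest.estimators_:
    t = tree.tree_
    for node in range(t.node_count):
        if t.children_left[node] == -1:   # skip leaves, keep split nodes
            continue
        f, thr = t.feature[node], t.threshold[node]
        attributes.append(f"x[{f}] <= {thr:.2f}")
        columns.append(X[:, f] <= thr)

context = np.column_stack(columns)  # shape: (n_samples, n_predicates)
print(context.shape)
```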

    Once‐more scattered next event estimation for volume rendering


    Detecting Bias in Monte Carlo Renderers using Welch’s t-test

    When checking the implementation of a new renderer, one usually compares the output to that of a reference implementation. However, such tests require a large number of samples to be reliable, and sometimes they are unable to reveal very subtle differences that are caused by bias but overshadowed by random noise. We propose using Welch’s t-test, a statistical test that reliably finds small bias even at low sample counts. Welch’s t-test is an established method in statistics to determine whether two sample sets have the same underlying mean, based on sample statistics. We adapt it to test whether two renderers converge to the same image, i.e., the same mean per pixel or pixel region. We also present two strategies for visualizing and analyzing the test’s results, assisting us in localizing especially problematic image regions and detecting biased implementations with high confidence at low sample counts for both the reference and the tested implementation.
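    As a rough illustration of the statistical core of this approach, the following sketch applies Welch's t-test per pixel, given per-pixel sample means, variances, and counts from two renderers; the aggregation over pixel regions and the visualization strategies described above are not reproduced here.

```python
# Minimal sketch (not the paper's exact procedure): per-pixel Welch's t-test
# between two renderers, given per-pixel sample mean, sample variance and
# sample count for each implementation.
import numpy as np
from scipy import stats

def welch_t_per_pixel(mean_a, var_a, n_a, mean_b, var_b, n_b):
    """Return per-pixel t statistic and two-sided p-value (Welch's t-test)."""
    se2_a = var_a / n_a
    se2_b = var_b / n_b
    t = (mean_a - mean_b) / np.sqrt(se2_a + se2_b)
    # Welch-Satterthwaite approximation of the degrees of freedom
    dof = (se2_a + se2_b) ** 2 / (se2_a**2 / (n_a - 1) + se2_b**2 / (n_b - 1))
    p = 2.0 * stats.t.sf(np.abs(t), dof)
    return t, p

# Pixels with very small p-values are candidates for biased behaviour.
```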

    Scaling Dimension

    Conceptual Scaling is a useful standard tool in Formal Concept Analysis and beyond. Its mathematical theory, as elaborated in the last chapter of the FCA monograph, still has room for improvement. As it stands, even some of the basic definitions are in flux. Our contribution was triggered by the study of concept lattices for tree classifiers and the scaling methods used there. We extend some basic notions, give precise mathematical definitions for them, and introduce the concept of scaling dimension. In addition to a detailed discussion of its properties, including an example, we show theoretical bounds related to the order dimension of concept lattices. We also study special subclasses, such as the ordinal and the interordinal scaling dimensions, and present first results and examples for them.
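    For reference, the standard scale-measure notion from FCA that this line of work builds on can be stated as follows; the precise refinements and the definition of the scaling dimension itself are given in the paper.

```latex
% Standard scale-measure notion from FCA; the scaling dimension in the
% paper builds on refinements of such definitions.
Let $\mathbb{K} = (G, M, I)$ be a formal context and
$\mathbb{S} = (G_{\mathbb{S}}, M_{\mathbb{S}}, I_{\mathbb{S}})$ a scale context.
A map $\sigma : G \to G_{\mathbb{S}}$ is a \emph{scale-measure} if for every
extent $A$ of $\mathbb{S}$ the preimage $\sigma^{-1}(A)$ is an extent of
$\mathbb{K}$; it is \emph{full} if, conversely, every extent of $\mathbb{K}$
arises as such a preimage.
```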

    Path Guiding with Vertex Triplet Distributions

    Good importance sampling strategies are decisive for the quality and robustness of photorealistic image synthesis with Monte Carlo integration. Path guiding approaches use transport paths sampled by an existing base sampler to build and refine a guiding distribution. This distribution then guides subsequent paths in regions that are otherwise hard to sample. We observe that all terms in the measurement contribution function sampled during path construction depend on at most three consecutive path vertices. We thus propose to build a 9D guiding distribution over vertex triplets that adapts to the full measurement contribution with a 9D Gaussian mixture model (GMM). For incremental path sampling, we query the model for the last two vertices of a path prefix, resulting in a 3D conditional distribution with which we sample the next vertex along the path. To make this approach scalable, we partition the scene with an octree and learn a local GMM for each leaf separately. In a learning phase, we sample paths using the current guiding distribution and collect triplets of path vertices. We resample these triplets online and keep only a fixed-size subset in reservoirs. After each progression, we obtain new GMMs from triplet samples by an initial hard clustering followed by expectation maximization. Since we model 3D vertex positions, our guiding distribution naturally extends to participating media. In addition, the symmetry in the GMM allows us to query it for paths constructed by a light tracer. Therefore, our method can guide both a path tracer and a light tracer from a jointly learned guiding distribution.
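    The incremental sampling step described above relies on standard conditioning of a Gaussian mixture: fixing the first six coordinates (the last two path vertices) yields a 3D conditional mixture for the next vertex. A minimal sketch with illustrative parameter names follows; it is not the paper's implementation.

```python
# Sketch of conditioning a 9D Gaussian mixture on the first 6 dimensions
# (the last two path vertices) to obtain a 3D distribution for the next
# vertex. Standard multivariate-Gaussian conditioning; not the paper's code.
import numpy as np

def condition_gmm(weights, means, covs, x_prefix, d_cond=6):
    """weights: (K,), means: (K, 9), covs: (K, 9, 9), x_prefix: (6,)."""
    new_w, new_mu, new_cov = [], [], []
    for w, mu, cov in zip(weights, means, covs):
        mu_a, mu_b = mu[:d_cond], mu[d_cond:]
        S_aa = cov[:d_cond, :d_cond]
        S_ab = cov[:d_cond, d_cond:]
        S_ba = cov[d_cond:, :d_cond]
        S_bb = cov[d_cond:, d_cond:]
        S_aa_inv = np.linalg.inv(S_aa)
        # Conditional mean and covariance of the 3D block given the prefix
        mu_c = mu_b + S_ba @ S_aa_inv @ (x_prefix - mu_a)
        S_c = S_bb - S_ba @ S_aa_inv @ S_ab
        # Reweight each component by its marginal likelihood of the prefix
        diff = x_prefix - mu_a
        norm = np.sqrt((2 * np.pi) ** d_cond * np.linalg.det(S_aa))
        lik = np.exp(-0.5 * diff @ S_aa_inv @ diff) / norm
        new_w.append(w * lik)
        new_mu.append(mu_c)
        new_cov.append(S_c)
    new_w = np.array(new_w)
    return new_w / new_w.sum(), np.array(new_mu), np.array(new_cov)
```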

    Ordinal Motifs in Lattices

    Lattices are a commonly used structure for the representation and analysis of relational and ontological knowledge. In particular, the analysis of these requires a decomposition of a large and high-dimensional lattice into a set of understandably large parts. With the present work we propose ordinal motifs as analytical units of meaning. We study these ordinal substructures (or standard scales) through (full) scale-measures of formal contexts from the field of formal concept analysis. We show that the underlying decision problems are NP-complete and provide results on how one can incrementally identify ordinal motifs to save computational effort. Accompanying our theoretical results, we demonstrate how ordinal motifs can be leveraged to retrieve basic meaning from a medium-sized ordinal data set.

    Estimating Local Beckmann Roughness for Complex BSDFs

    Many light transport related techniques require an analysis of the blur width of light scattering at a path vertex, for instance a Beckmann roughness. Such use cases include, for instance, analysis of expected variance (and potentially biased countermeasures in production rendering), radiance caching or directionally dependent virtual point light sources, or determination of step sizes in the path space Metropolis light transport framework: recent advanced mutation strategies for Metropolis Light Transport [Veach 1997], such as Manifold Exploration [Jakob 2013] and Half Vector Space Light Transport [Kaplanyan et al. 2014], employ local curvature of the BSDFs (such as an average Beckmann roughness) at all interactions along the path in order to determine an optimal mutation step size. A single average Beckmann roughness, however, can be a bad fit for complex measured materials (such as [Matusik et al. 2003]) and, moreover, such curvature is completely undefined for layered materials as it depends on the active scattering layer. We propose a robust estimation of local curvature for BSDFs of any complexity by using local Beckmann approximations, taking into account additional factors such as both the incident and outgoing directions.
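    A hypothetical sketch of one way such a local estimate could be obtained: sample the BSDF around a fixed incident direction, collect half-vector slopes, and fit a Gaussian, using the correspondence between a Beckmann lobe of roughness alpha and a Gaussian slope distribution with per-axis variance alpha^2/2. The sampling interface below is an assumption, not the paper's API.

```python
# Hypothetical sketch: estimate an effective local Beckmann roughness for an
# arbitrary (e.g. measured or layered) BSDF by sampling outgoing directions
# around a fixed incident direction and fitting a Gaussian to the half-vector
# slopes. A Beckmann lobe corresponds to a Gaussian slope distribution with
# per-axis variance alpha^2 / 2. `sample_bsdf` is an assumed interface.
import numpy as np

def estimate_beckmann_roughness(sample_bsdf, wi, n_samples=1024):
    """sample_bsdf(wi) -> unit outgoing direction wo (local frame, z = normal)."""
    slopes = []
    for _ in range(n_samples):
        wo = sample_bsdf(wi)
        h = wi + wo
        h /= np.linalg.norm(h)
        if h[2] <= 1e-4:          # ignore grazing half vectors
            continue
        slopes.append([-h[0] / h[2], -h[1] / h[2]])
    slopes = np.asarray(slopes)
    var = slopes.var(axis=0).mean()   # average per-axis slope variance
    return np.sqrt(2.0 * var)         # alpha such that variance = alpha^2 / 2
```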

    Compressive Higher-order Sparse and Low-Rank Acquisition with a Hyperspectral Light Stage

    Compressive sparse and low-rank recovery (CSLR) is a novel method for compressed sensing that derives a low-rank and a sparse data term from randomized projection measurements. While previous approaches either apply compressive measurements to phenomena assumed to be sparse or explicitly assume and measure low-rank approximations, CSLR remains robust even when such assumptions are violated. In this paper, we derive CSLR using Fixed-Point Continuation algorithms and extend this approach to exploit the correlation in higher-order dimensions to further reduce the number of captured samples. Though generally applicable, we demonstrate the effectiveness of our approach on data sets captured with a novel hyperspectral light stage that can emit a distinct spectrum from each of its 196 light source directions, enabling bispectral measurements of reflectance from arbitrary viewpoints. Bispectral reflectance fields and BTFs are faithfully reconstructed from a small number of compressed measurements.
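    As a rough sketch of the underlying recovery problem (not the paper's exact Fixed-Point Continuation algorithm), the following alternates a gradient step on the measurement residual with singular-value thresholding for the low-rank part and soft thresholding for the sparse part; all names and parameters are illustrative.

```python
# Minimal sketch: recover a low-rank part L and a sparse part S from
# compressive measurements b = A @ vec(L + S) by alternating a gradient step
# with singular-value thresholding (for L) and soft thresholding (for S).
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Soft thresholding: prox of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def cslr_sketch(A, b, shape, lam=0.1, step=1e-3, iters=500):
    L = np.zeros(shape)
    S = np.zeros(shape)
    for _ in range(iters):
        residual = A @ (L + S).ravel() - b
        grad = (A.T @ residual).reshape(shape)
        L = svt(L - step * grad, step)          # low-rank proximal step
        S = soft(S - step * grad, step * lam)   # sparse proximal step
    return L, S
```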